5 research outputs found

    Performance of Integrated IoT Network with Hybrid mmWave/FSO/THz Backhaul Link

    Full text link
    Establishing end-to-end connectivity of an Internet of Things (IoT) network with the core network for collecting sensing data from remote and hard-to-reach terrains is a challenging task. In this article, we analyze the performance of an IoT network integrated with a wireless backhaul link for data collection. We propose a solution that involves a self-configuring protocol for aggregate node (AN) selection in an IoT network, which sends the data packet to an unmanned aerial vehicle (UAV) over radio frequency (RF) channels. We adopt a novel hybrid transmission technique for the wireless backhaul, employing opportunistic selection combining (OSC) and maximal ratio combining (MRC), that simultaneously transmits the data packet over mmWave (mW), free-space optical (FSO), and terahertz (THz) technologies to take advantage of their complementary characteristics. We employ the decode-and-forward (DF) protocol to integrate the IoT and backhaul links and provide a physical-layer performance assessment using outage probability and average bit-error rate (BER) under diverse channel conditions. We also develop simplified expressions to gain a better understanding of the system's performance at high signal-to-noise ratio (SNR). We provide computer simulations to compare different wireless backhaul technologies under various channel and SNR scenarios and demonstrate the performance of data collection using the integrated link. Comment: This work has been submitted to the IEEE for possible publication.
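The backhaul scheme above transmits the same packet over multiple technologies and combines the received branches. As a rough illustration of why combining helps, here is a minimal Monte Carlo sketch comparing selection combining (keep only the strongest branch) with maximal ratio combining (coherently sum all branches). It assumes i.i.d. Rayleigh-faded branches, under which the per-branch SNR is exponentially distributed; it is not the mmWave/FSO/THz channel model the paper actually analyzes.

```python
import random

def simulate_outage(num_branches, avg_snr_db, threshold_db, trials=20000, seed=1):
    """Monte Carlo outage probability for selection combining (SC) versus
    maximal ratio combining (MRC) over i.i.d. Rayleigh-fading branches.
    Returns (sc_outage, mrc_outage)."""
    rng = random.Random(seed)
    avg_snr = 10 ** (avg_snr_db / 10)      # linear average SNR per branch
    threshold = 10 ** (threshold_db / 10)  # linear outage threshold
    sc_outages = mrc_outages = 0
    for _ in range(trials):
        # Rayleigh fading -> exponentially distributed instantaneous SNR
        snrs = [rng.expovariate(1 / avg_snr) for _ in range(num_branches)]
        if max(snrs) < threshold:          # SC keeps only the best branch
            sc_outages += 1
        if sum(snrs) < threshold:          # MRC coherently sums all branches
            mrc_outages += 1
    return sc_outages / trials, mrc_outages / trials
```

Since the MRC output SNR is the sum of the branch SNRs while SC keeps only the maximum, MRC's outage probability can never exceed SC's in this model.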

    Let’s Make India Great Again: The Effect of China’s Currency Manipulation on India’s Balance of Payments

    No full text
    During the 1980s, India experienced a sudden decline in its Balance of Payments (BOP), which led the country to increase its national debt. The growing fiscal debt led to a decline in foreign investment in India, and the country was soon on the brink of a financial crisis; India’s currency depreciated severely during the 1991 crisis. Our study examines China’s foreign exchange intervention during the period 1980-2000, as we suspect that the People’s Bank of China’s (PBoC) currency manipulation, coupled with the growing fiscal debt, triggered the 1991 financial crisis. We hypothesize that China’s currency manipulation had a negative effect on India’s BOP. To examine the BOP, we separate the effects on the Capital and Current accounts. In theory, the BOP should always equal zero; however, we analyze only the Current and Capital account portions of the BOP, without taking into account the official reserves settlement account that balances out any surplus or deficit. We derive the Mundell-Fleming IS-LM model in order to theorize China’s currency manipulation. The five empirical articles that we analyze and evaluate help us derive our conceptual and operational model. Since we utilize a time-series data set, we test for unit roots, as well as multicollinearity, serial correlation, and heteroskedasticity. The results of our empirical testing indicate that China’s currency manipulation had a significant effect on India’s Capital account only. Our hypothesis is therefore supported, though only weakly.
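The unit-root testing mentioned above can be sketched with a bare-bones Dickey-Fuller regression. This is only an illustration of how the statistic is constructed (no lagged-difference terms, constant only); a real analysis would use a full ADF test with proper critical values, e.g. statsmodels' adfuller.

```python
import numpy as np

def dickey_fuller_stat(y):
    """t-statistic of the simple Dickey-Fuller regression
        dy_t = alpha + beta * y_{t-1} + e_t
    A large negative statistic is evidence against a unit root."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                       # first differences dy_t
    ylag = y[:-1]                         # lagged level y_{t-1}
    X = np.column_stack([np.ones_like(ylag), ylag])
    coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ coef
    sigma2 = resid @ resid / (len(dy) - 2)        # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)         # OLS covariance matrix
    return coef[1] / np.sqrt(cov[1, 1])           # t-stat on beta
```

A strongly stationary series yields a large negative statistic, while a random walk yields a statistic near zero, which is why the test discriminates between the two.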

    Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

    Get PDF
    Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting. Comment: 27 pages, 17 figures + references and appendices, repo: https://github.com/google/BIG-bench
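The findings above report that model calibration improves with scale. One common way calibration is quantified, though not necessarily the exact metric used in BIG-bench, is the expected calibration error (ECE): bin predictions by confidence and take the weighted average gap between each bin's accuracy and its mean confidence.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over equal-width confidence bins on (0, 1].
    confidences: predicted probabilities; correct: 0/1 outcomes."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        weight = in_bin.mean()  # fraction of predictions in this bin
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += weight * gap
    return ece
```

A perfectly calibrated model (bin accuracy always matching bin confidence) scores 0; an overconfident model scores higher.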